Oracle property


A Generalized Adaptive Joint Learning Framework for High-Dimensional Time-Varying Models

Chen, Baolin, Ran, Mengfei

arXiv.org Machine Learning

In modern biomedical and econometric studies, longitudinal processes are often characterized by complex time-varying associations and abrupt regime shifts that are shared across correlated outcomes. Standard functional data analysis (FDA) methods, which prioritize smoothness, often fail to capture these dynamic structural features, particularly in high-dimensional settings. This article introduces Adaptive Joint Learning (AJL), a hierarchical regularization framework designed to integrate functional variable selection with structural changepoint detection in multivariate time-varying coefficient models. Unlike standard simultaneous estimation approaches, we propose a theoretically grounded two-stage screening-and-refinement procedure. This framework first synergizes adaptive group-wise penalization with sure screening principles to robustly identify active predictors, followed by a refined fused regularization step that effectively borrows strength across multiple outcomes to detect local regime shifts. We provide a rigorous theoretical analysis of the estimator in the ultra-high-dimensional regime (p >> n). Crucially, we establish the sure screening consistency of the first stage, which serves as the foundation for proving that the refined estimator achieves the oracle property: it performs as well as if the true active set and changepoint locations were known a priori. A key theoretical contribution is the explicit handling of approximation bias via undersmoothing conditions to ensure valid asymptotic inference. The proposed method is validated through comprehensive simulations and an application to Sleep-EDF data, revealing novel dynamic patterns in physiological states.
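
To make the two-stage structure concrete, here is a stylized sketch in our own notation; the basis expansion, weights w_j and v_{j,k}, and penalty forms are illustrative assumptions, not the authors' exact formulation. Stage one screens predictors with an adaptive group penalty on the basis coefficients of each time-varying coefficient; stage two applies a fused penalty on the screened set to locate regime shifts:

```latex
% Stage 1: adaptive group-wise screening for the model
%   y_i = \sum_{j=1}^{p} x_{ij}\,\beta_j(t_i) + \varepsilon_i,
%   with \beta_j(t) \approx B(t)^{\top}\gamma_j  (spline basis expansion)
\hat{\gamma} = \arg\min_{\gamma}\;
  \sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{p} x_{ij}\,B(t_i)^{\top}\gamma_j\Big)^{2}
  + \lambda_1 \sum_{j=1}^{p} w_j \,\lVert \gamma_j \rVert_2,
  \qquad w_j = \lVert \tilde{\gamma}_j \rVert_2^{-1}

% Stage 2: fused refinement on the screened set \hat{S},
% penalizing jumps between adjacent local coefficients to detect changepoints
\hat{\gamma}^{\mathrm{ref}} = \arg\min_{\gamma_{\hat{S}}}\;
  \mathcal{L}(\gamma_{\hat{S}})
  + \lambda_2 \sum_{j \in \hat{S}} \sum_{k} v_{j,k}\,
    \lVert \gamma_{j,k+1} - \gamma_{j,k} \rVert_2
```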


Semiparametric Differential Graph Models

Pan Xu, Quanquan Gu

Neural Information Processing Systems

In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely the Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions, and the variation between these two networks (i.e., the differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi-likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than state-of-the-art methods and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real-world data support our theory.
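
For intuition, a quasi-likelihood objective of the kind common in this differential-graph literature looks roughly as follows (our notation; the paper's exact loss and nonconvex penalty p_lambda may differ). Here the Sigma-hats are rank-based (e.g., Kendall's tau) correlation estimates suited to elliptical distributions:

```latex
\hat{\Delta} = \arg\min_{\Delta}\;
  \frac{1}{2}\,\mathrm{tr}\!\left(\Delta^{\top}\hat{\Sigma}_1 \Delta\,\hat{\Sigma}_2\right)
  - \mathrm{tr}\!\left(\Delta^{\top}\big(\hat{\Sigma}_1 - \hat{\Sigma}_2\big)\right)
  + \sum_{j,k} p_{\lambda}\!\left(|\Delta_{jk}|\right)
```

A useful sanity check: the population minimizer of the unpenalized loss satisfies Sigma_1 Delta Sigma_2 = Sigma_1 - Sigma_2, whose solution is exactly the difference of the precision matrices.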


Optimizing Feature Selection in Causal Inference: A Three-Stage Computational Framework for Unbiased Estimation

Yang, Tianyu, Noor-E-Alam, Md.

arXiv.org Machine Learning

Feature selection is an important but challenging task in causal inference for obtaining unbiased estimates of causal quantities. Properly selected features in causal inference not only significantly reduce the time required to implement a matching algorithm but, more importantly, can also reduce the bias and variance when estimating causal quantities. When feature selection techniques are applied in causal inference, the crucial criterion is to select variables that, when used for matching, can achieve an unbiased and robust estimation of causal quantities. Recent research suggests that balancing only on treatment-associated variables introduces bias, while balancing on spurious variables increases variance. To address this issue, we propose an enhanced three-stage framework that shows a significant improvement in selecting the desired subset of variables compared to the existing state-of-the-art feature selection framework for causal inference, resulting in lower bias and variance in estimating the causal quantity. We evaluated our proposed framework on state-of-the-art synthetic data across various settings and observed superior performance within a feasible computation time, ensuring scalability for large-scale datasets. Finally, to demonstrate the applicability of our proposed methodology to large-scale real-world data, we evaluated an important US healthcare policy question related to the opioid crisis: whether opioid use disorder has a causal relationship with suicidal behavior.
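
As a rough illustration of the selection criterion described above (retain confounders and outcome predictors; exclude treatment-only and spurious variables), here is a minimal Python sketch. The function name, the lasso screen, and fitting on controls only are our assumptions for illustration, not the paper's actual three-stage procedure:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def outcome_based_screen(X, y, treated):
    """Keep covariates associated with the outcome, fitted on controls only,
    so treatment-only (instrumental) and spurious variables are screened out.

    treated: boolean array marking treated units.
    """
    controls = ~treated
    lasso = LassoCV(cv=5).fit(X[controls], y[controls])
    return np.flatnonzero(lasso.coef_ != 0)  # indices of retained covariates
```

Matching would then be performed on the retained columns only.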


Adaptive Lasso, Transfer Lasso, and Beyond: An Asymptotic Perspective

Takada, Masaaki, Fujisawa, Hironori

arXiv.org Machine Learning

This paper presents a comprehensive exploration of the theoretical properties of the Adaptive Lasso and the Transfer Lasso. The Adaptive Lasso, a well-established method, employs regularization weights divided by initial estimators and is characterized by asymptotic normality and variable selection consistency. In contrast, the recently proposed Transfer Lasso applies regularization to deviations from initial estimators, with a demonstrated capacity to curtail non-asymptotic estimation errors. A pivotal question thus emerges: given the distinct ways the Adaptive Lasso and the Transfer Lasso employ initial estimators, what benefits or drawbacks does this disparity confer upon each method? This paper conducts a theoretical examination of the asymptotic properties of the Transfer Lasso, thereby elucidating how it differs from the Adaptive Lasso. Informed by this analysis, we introduce a novel method that combines the strengths of both methods while compensating for their weaknesses. The paper concludes with validations of our theory and comparisons of the methods via simulation experiments.
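
The "divided by" versus "subtracted by" contrast is easiest to see side by side. Up to parameterization (our notation), with initial estimator beta-tilde:

```latex
% Adaptive Lasso: penalty weights divided by initial estimates
\hat{\beta}^{\mathrm{AL}} = \arg\min_{\beta}\;
  \frac{1}{2n}\lVert y - X\beta \rVert_2^2
  + \lambda \sum_{j=1}^{p} \frac{|\beta_j|}{|\tilde{\beta}_j|^{\gamma}}

% Transfer Lasso: penalty on deviations from initial estimates
\hat{\beta}^{\mathrm{TL}} = \arg\min_{\beta}\;
  \frac{1}{2n}\lVert y - X\beta \rVert_2^2
  + \lambda \Big( \alpha \sum_{j=1}^{p} |\beta_j|
  + (1 - \alpha) \sum_{j=1}^{p} |\beta_j - \tilde{\beta}_j| \Big)
```

Shrinking toward beta-tilde (Transfer Lasso) reuses the initial estimator as an anchor, whereas reweighting by beta-tilde (Adaptive Lasso) uses it only to relax shrinkage on large coefficients.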


On the Sparse DAG Structure Learning Based on Adaptive Lasso

Xu, Danru, Gao, Erdun, Huang, Wei, Wang, Menghan, Song, Andy, Gong, Mingming

arXiv.org Artificial Intelligence

Learning the underlying Bayesian networks (BNs), represented by directed acyclic graphs (DAGs), of the concerned events from purely observational data is a crucial part of evidential reasoning. This task remains challenging due to the large and discrete search space. A recent flurry of developments following NOTEARS [1] recasts this combinatorial problem as a continuous optimization problem by leveraging an algebraic characterization of acyclicity. However, these continuous optimization methods tend to yield non-sparse graphs after numerical optimization, which makes it hard to rule out potentially cycle-inducing edges or falsely discovered edges with small values. To address this issue, in this paper we develop a completely data-driven DAG structure learning method that requires no predefined threshold for pruning small values. We name our method NOTEARS with adaptive Lasso (NOTEARS-AL); it applies an adaptive penalty to ensure the sparsity of the estimated DAG. Moreover, we show that NOTEARS-AL inherits the oracle properties under certain conditions. Extensive experiments on both synthetic datasets and a real-world dataset demonstrate that our method consistently outperforms NOTEARS.
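
Concretely, the adaptive penalty plugs into the NOTEARS continuous program roughly as follows (our notation; the exponent gamma and the initial estimate W-tilde would come from a first NOTEARS pass):

```latex
\hat{W} = \arg\min_{W}\;
  \frac{1}{2n}\lVert X - XW \rVert_F^2
  + \lambda \sum_{i \neq j} \frac{|W_{ij}|}{|\tilde{W}_{ij}|^{\gamma}}
\quad \text{subject to} \quad
h(W) = \mathrm{tr}\!\left(e^{W \circ W}\right) - d = 0
```

where the circle denotes the Hadamard product and h(W) = 0 is the NOTEARS acyclicity constraint. Entries with tiny initial estimates receive large weights and are driven exactly to zero, removing the need for ad hoc post-thresholding.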


Measuring tail risk at high-frequency: An $L_1$-regularized extreme value regression approach with unit-root predictors

Hambuckers, Julien, Sun, Li, Trapin, Luca

arXiv.org Machine Learning

We study tail risk dynamics in high-frequency financial markets and their connection with trading activity and market uncertainty. We introduce a dynamic extreme value regression model accommodating both stationary and local unit-root predictors to appropriately capture the time-varying behaviour of the distribution of high-frequency extreme losses. To characterize trading activity and market uncertainty, we consider several volatility and liquidity predictors, and propose a two-step adaptive $L_1$-regularized maximum likelihood estimator to select the most appropriate ones. We establish the oracle property of the proposed estimator for selecting both stationary and local unit-root predictors, and show its good finite-sample properties in an extensive simulation study. Studying the high-frequency extreme losses of nine large liquid U.S. stocks using 42 liquidity and volatility predictors, we find the severity of extreme losses to be well predicted by low levels of price impact in periods of high volatility of liquidity and volatility.
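
A stylized version of the second-step estimator (our notation; the paper's exact likelihood for high-frequency extreme losses and its link functions are more elaborate) is an adaptively weighted $L_1$-penalized generalized Pareto likelihood:

```latex
\hat{\theta} = \arg\max_{\theta}\;
  \sum_{t=1}^{T} \log f_{\mathrm{GPD}}\!\left(y_t;\, \sigma_t(\theta),\, \xi_t(\theta)\right)
  - \lambda \sum_{j=1}^{p} \frac{|\theta_j|}{|\tilde{\theta}_j|^{\gamma}}
```

where sigma_t and xi_t are the GPD scale and shape linked to the volatility and liquidity predictors, and theta-tilde is the first-step unpenalized estimate supplying the adaptive weights.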


Machine Learning Advances for Time Series Forecasting

Masini, Ricardo P., Medeiros, Marcelo C., Mendes, Eduardo F.

arXiv.org Machine Learning

In this paper we survey the most recent advances in supervised machine learning and high-dimensional models for time series forecasting. We consider both linear and nonlinear alternatives. Among the linear methods, we pay special attention to penalized regressions and ensembles of models. The nonlinear methods considered in the paper include shallow and deep neural networks, in their feed-forward and recurrent versions, and tree-based methods such as random forests and boosted trees. We also consider ensemble and hybrid models that combine ingredients from different alternatives. Tests for superior predictive ability are briefly reviewed. Finally, we discuss applications of machine learning in economics and finance and provide an illustration with high-frequency financial data.
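
As a minimal sketch of one of the surveyed linear methods, the following Python snippet fits a lasso direct forecast on lagged values of a toy series (the data, lag order, and helper function are hypothetical, for illustration only):

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lagged_design(y, p):
    """Stack lags 1..p of y as regressors for one-step-ahead forecasting."""
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    return X, y[p:]  # targets aligned with the rows of X

rng = np.random.default_rng(0)
y = rng.standard_normal(500).cumsum()   # toy random-walk series
p = 12
X, target = lagged_design(y, p)
model = LassoCV(cv=5).fit(X, target)    # penalty chosen by cross-validation
x_next = y[-p:][::-1].reshape(1, -1)    # lags 1..p at the forecast origin
forecast = model.predict(x_next)        # one-step-ahead point forecast
```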


Variable Selection with Copula Entropy

Ma, Jian

arXiv.org Machine Learning

Variable selection is of significant importance for classification and regression tasks in machine learning and statistical applications where both predictability and explainability are needed. In this paper, a Copula Entropy (CE) based method for variable selection, which uses CE-based ranks to select variables, is proposed. The method is both model-free and tuning-free. Comparison experiments between the proposed method and traditional variable selection methods, such as stepwise selection, regularized generalized linear models, and Adaptive LASSO, were conducted on the UCI heart disease data. Experimental results show that the CE-based method can effectively select the 'right' variables and derive more interpretable results than traditional methods without sacrificing accuracy. It is believed that CE-based variable selection can help to build more explainable models.
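
Copula entropy coincides with negative mutual information, so the ranking step can be sketched with a generic nonparametric MI estimator standing in for the authors' own CE estimator (a rough illustration, not the paper's implementation):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def ce_rank(X, y, top_k=10):
    """Rank features by estimated mutual information with the target
    (copula entropy = -MI, so high MI means strong association)."""
    mi = mutual_info_regression(X, y, random_state=0)
    order = np.argsort(mi)[::-1]            # strongest association first
    return order[:top_k], mi[order[:top_k]]
```

For a classification target such as the heart disease label, mutual_info_classif would replace mutual_info_regression.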


Nonconvex Regularized Robust Regression with Oracle Properties in Polynomial Time

Pan, Xiaoou, Sun, Qiang, Zhou, Wen-Xin

arXiv.org Machine Learning

This paper investigates tradeoffs among optimization errors, statistical rates of convergence and the effect of heavy-tailed random errors for high-dimensional adaptive Huber regression with nonconvex regularization. When the additive errors in linear models have only bounded second moment, our results suggest that adaptive Huber regression with nonconvex regularization yields statistically optimal estimators that satisfy oracle properties as if the true underlying support set were known beforehand. Computationally, we need as many as O(log s + log log d) convex relaxations to reach such oracle estimators, where s and d denote the sparsity and ambient dimension, respectively. Numerical studies lend strong support to our methodology and theory.
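
The objective at issue combines the Huber loss with a folded-concave penalty; in a common notation (ours, but consistent with this literature):

```latex
\hat{\beta} = \arg\min_{\beta}\;
  \frac{1}{n}\sum_{i=1}^{n} \ell_{\tau}\!\left(y_i - x_i^{\top}\beta\right)
  + \sum_{j=1}^{d} p_{\lambda}\!\left(|\beta_j|\right),
\qquad
\ell_{\tau}(u) =
\begin{cases}
  u^2/2, & |u| \le \tau,\\
  \tau |u| - \tau^2/2, & |u| > \tau,
\end{cases}
```

with p_lambda a folded-concave penalty such as SCAD or MCP. Each of the O(log s + log log d) convex relaxations is then a weighted L1-penalized Huber regression obtained by locally linearizing p_lambda around the current iterate.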